Current Issue: July - September | Volume: 2012 | Issue Number: 3 | Articles: 5
With the emergence of cloud computing, organizations can exploit this new technology by consuming cloud services on demand. However, they must place their data and processes on a cloud, so they no longer have full control over their data and must map their access control policies onto those of the cloud service. In addition, some aspects of this technology, such as interoperability, multi-tenancy, and continuous access control, are not supported by traditional approaches. The usage control model, with its two key features of continuous access control and attribute mutability, is more compatible with the security requirements of cloud computing. In this paper, a three-layer access control model based on usage control is proposed for cloud services, in which separation of duties supports multi-tenancy and the least-privilege principle....
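For readers unfamiliar with the usage control (UCON) model mentioned above, the sketch below illustrates its two named features, continuous access control and attribute mutability, in a minimal form: an ongoing session is re-checked against a policy whose subject attributes change as a side effect of usage. This is a generic illustration with invented names (UsageControlMonitor, a quota attribute); it is not the paper's three-layer architecture.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Minimal illustration of two UCON features: continuous access control
 * (the decision is re-evaluated while usage is ongoing) and attribute
 * mutability (attributes change as a side effect of usage).
 * All names are hypothetical and chosen for this sketch only.
 */
public class UsageControlMonitor {

    // Mutable subject attribute: how much of a usage quota has been consumed.
    private final Map<String, Integer> consumedQuota = new ConcurrentHashMap<>();
    private final int quotaLimit;

    public UsageControlMonitor(int quotaLimit) {
        this.quotaLimit = quotaLimit;
    }

    /** Pre-authorization: checked once, before usage starts. */
    public boolean preAuthorize(String subject) {
        return consumedQuota.getOrDefault(subject, 0) < quotaLimit;
    }

    /**
     * Ongoing authorization: invoked repeatedly during usage; access is
     * revoked as soon as the mutable attribute violates the policy.
     */
    public boolean ongoingAuthorize(String subject) {
        return consumedQuota.getOrDefault(subject, 0) < quotaLimit;
    }

    /** Attribute mutability: each unit of usage updates the subject attribute. */
    public void recordUsage(String subject, int units) {
        consumedQuota.merge(subject, units, Integer::sum);
    }

    public static void main(String[] args) {
        UsageControlMonitor monitor = new UsageControlMonitor(3);
        String tenantUser = "tenantA:alice";
        if (monitor.preAuthorize(tenantUser)) {
            // Simulate an ongoing session re-checked after every unit of usage.
            while (monitor.ongoingAuthorize(tenantUser)) {
                System.out.println("access granted, consuming one unit");
                monitor.recordUsage(tenantUser, 1);
            }
            System.out.println("quota exhausted, access revoked mid-session");
        }
    }
}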
Background: In Silico Livers (ISLs) are works in progress. They are used to challenge multilevel, multi-attribute, mechanistic hypotheses about the hepatic disposition of xenobiotics coupled with hepatic responses. To enhance ISL-to-liver mappings, we added discrete-time metabolism, biliary elimination, and bolus dosing features to a previously validated ISL and initiated re-validation experiments that required scaling to more simulated lobules than previously, more than could be achieved using the local cluster technology. Rather than dramatically increasing the size of our local cluster, we undertook the re-validation experiments using the Amazon EC2 cloud platform. Doing so required demonstrating the efficacy of scaling a simulation to use more cluster nodes and assessing the scientific equivalence of local cluster validation experiments with those executed on the cloud platform.
Results: The local cluster technology was duplicated on the Amazon EC2 cloud platform. Synthetic modeling protocols were followed to identify a successful parameterization. Experiment sample sizes (number of simulated lobules) on both platforms were 49, 70, 84, and 152 (cloud only). Experimental indistinguishability was demonstrated for ISL outflow profiles of diltiazem on both platforms for experiments consisting of 84 or more samples. The process was analogous to demonstrating equivalency of results from two different wet-labs.
Conclusions: The results provide additional evidence that disposition simulations using ISLs can cover the behavior space of liver experiments in distinct experimental contexts (there is in silico-to-wet-lab phenotype similarity). The scientific value of experimenting with multiscale biomedical models has been limited to research groups with access to computer clusters. The availability of cloud technology, coupled with the evidence of scientific equivalency, has lowered this barrier and will greatly facilitate model sharing as well as provide straightforward tools for scaling simulations to encompass greater detail with no extra investment in hardware....
Background: Large comparative genomics studies and tools are becoming increasingly compute-expensive as the number of available genome sequences continues to rise. The capacity and cost of local computing infrastructures are likely to become prohibitive with this increase, especially as the breadth of questions continues to grow. Alternative computing architectures, in particular cloud computing environments, may help alleviate this increasing pressure and enable fast, large-scale, and cost-effective comparative genomics strategies going forward. To test this, we redesigned a typical comparative genomics algorithm, the reciprocal smallest distance algorithm (RSD), to run within Amazon's Elastic Compute Cloud (EC2). We then employed the RSD-cloud for ortholog calculations across a wide selection of fully sequenced genomes.
Results: We ran more than 300,000 RSD-cloud processes within EC2. These jobs were farmed simultaneously to 100 high-capacity compute nodes using Amazon Web Services Elastic MapReduce and included a wide mix of large and small genomes. The total computation took just under 70 hours and cost a total of $6,302 USD.
Conclusions: The effort to transform existing comparative genomics algorithms from local compute infrastructures to the cloud is not trivial. However, the speed and flexibility of cloud computing environments provide a substantial boost at manageable cost. The procedure designed to transform the RSD algorithm into a cloud-ready application is readily adaptable to similar comparative genomics problems....
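To give a concrete picture of how independent per-genome-pair computations can be farmed out via MapReduce, the following sketch shows a Hadoop-style mapper in which each input line names a pair of genomes and each map task shells out to a local ortholog-computation command. The "rsd" command name, its flags, and the input format are assumptions made for illustration; the paper's actual RSD-cloud scripts are not reproduced here.

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

/**
 * Sketch of farming independent ortholog computations out as map tasks.
 * Each input line is expected to contain "<query_genome> <subject_genome>";
 * the mapper runs a hypothetical local "rsd" command for that pair and
 * emits the pair together with the process exit status.
 */
public class RsdPairMapper extends Mapper<LongWritable, Text, Text, Text> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String[] pair = value.toString().trim().split("\\s+");
        if (pair.length != 2) {
            return; // skip malformed lines
        }
        // Hypothetical command-line invocation of the ortholog computation.
        ProcessBuilder pb = new ProcessBuilder(
                "rsd", "--query", pair[0], "--subject", pair[1]);
        pb.inheritIO();
        int exitCode = pb.start().waitFor();
        context.write(new Text(pair[0] + ":" + pair[1]),
                      new Text("exit=" + exitCode));
    }
}

In an Elastic MapReduce deployment of this kind, each line of the genome-pair list becomes one map task, so a list of hundreds of thousands of pairs can be spread across the available compute nodes without any further coordination logic.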
Cloud computing is a computing paradigm in which a large pool of systems is connected over private or public networks. It provides dynamically scalable infrastructure for applications, data, and file storage, and significantly reduces the cost of computation, application hosting, content storage, and delivery. Cloud computing is a practical approach for transforming a data center from a capital-intensive set-up into a variable-priced environment. This paper presents algorithms for creating a small cloud using CloudSim, together with a key technique for saving energy in the cloud by migrating virtual machines between data centers. An idle data center still consumes a significant amount of energy, which makes energy consumption a central problem for data center operators....
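As a starting point for readers who want to set up a "small cloud" of the kind the paper describes, the following is a minimal sketch using the CloudSim toolkit (assuming the CloudSim 3.x Java API, in the style of its bundled examples): one datacenter with a single host, one virtual machine, and one cloudlet (task). All capacities and cost figures are arbitrary placeholder values, and the sketch does not implement the paper's VM-migration energy-saving scheme.

import java.util.ArrayList;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;

import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.CloudletSchedulerTimeShared;
import org.cloudbus.cloudsim.Datacenter;
import org.cloudbus.cloudsim.DatacenterBroker;
import org.cloudbus.cloudsim.DatacenterCharacteristics;
import org.cloudbus.cloudsim.Host;
import org.cloudbus.cloudsim.Pe;
import org.cloudbus.cloudsim.Storage;
import org.cloudbus.cloudsim.UtilizationModelFull;
import org.cloudbus.cloudsim.Vm;
import org.cloudbus.cloudsim.VmAllocationPolicySimple;
import org.cloudbus.cloudsim.VmSchedulerTimeShared;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;

/** Minimal "small cloud": one datacenter, one host, one VM, one cloudlet. */
public class SmallCloudExample {

    public static void main(String[] args) throws Exception {
        // Initialise CloudSim: 1 user, current calendar, no trace events.
        CloudSim.init(1, Calendar.getInstance(), false);

        // The datacenter registers itself with the simulation on construction.
        Datacenter datacenter = createDatacenter("Datacenter_0");
        DatacenterBroker broker = new DatacenterBroker("Broker_0");
        int brokerId = broker.getId();

        // One VM: 1000 MIPS, 1 core, 512 MB RAM, 1000 bandwidth, 10 GB image, Xen VMM.
        Vm vm = new Vm(0, brokerId, 1000, 1, 512, 1000, 10000, "Xen",
                new CloudletSchedulerTimeShared());
        List<Vm> vmList = new ArrayList<Vm>();
        vmList.add(vm);
        broker.submitVmList(vmList);

        // One cloudlet (task) of 400000 instructions on 1 core.
        UtilizationModelFull full = new UtilizationModelFull();
        Cloudlet cloudlet = new Cloudlet(0, 400000, 1, 300, 300, full, full, full);
        cloudlet.setUserId(brokerId);
        List<Cloudlet> cloudletList = new ArrayList<Cloudlet>();
        cloudletList.add(cloudlet);
        broker.submitCloudletList(cloudletList);

        CloudSim.startSimulation();
        CloudSim.stopSimulation();

        List<Cloudlet> finished = broker.getCloudletReceivedList();
        for (Cloudlet c : finished) {
            System.out.println("Cloudlet " + c.getCloudletId()
                    + " finished at " + c.getFinishTime()
                    + " in " + datacenter.getName());
        }
    }

    /** Builds a datacenter with a single one-core host. */
    private static Datacenter createDatacenter(String name) throws Exception {
        List<Pe> peList = new ArrayList<Pe>();
        peList.add(new Pe(0, new PeProvisionerSimple(1000))); // 1 core, 1000 MIPS

        List<Host> hostList = new ArrayList<Host>();
        hostList.add(new Host(0,
                new RamProvisionerSimple(2048),
                new BwProvisionerSimple(10000),
                1000000, // storage (MB)
                peList,
                new VmSchedulerTimeShared(peList)));

        // Arch, OS, VMM, hosts, time zone, and placeholder cost parameters.
        DatacenterCharacteristics characteristics = new DatacenterCharacteristics(
                "x86", "Linux", "Xen", hostList, 10.0, 3.0, 0.05, 0.001, 0.0);

        return new Datacenter(name, characteristics,
                new VmAllocationPolicySimple(hostList),
                new LinkedList<Storage>(), 0);
    }
}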
Background: There is a significant demand in the life sciences for creating pipelines or workflows that chain a number of discrete compute- and data-intensive analysis tasks into sophisticated analysis procedures. This need has led to the development of general as well as domain-specific workflow environments that are either complex desktop applications or Internet-based applications. Complexities can arise when configuring these applications in heterogeneous compute and storage environments if the execution and data access models are not designed appropriately. These complexities manifest themselves through limited access to available HPC resources, significant overhead required to configure tools, and the inability for users to simply manage files across heterogeneous HPC storage infrastructure.
Results: In this paper, we describe the architecture of a software system, called Yabi, that is adaptable to a range of pluggable execution and data backends in an open source implementation. With seamless and transparent access to heterogeneous HPC environments at its core, Yabi provides an analysis workflow environment that can create and reuse workflows as well as manage large amounts of both raw and processed data in a secure and flexible way across geographically distributed compute resources. Yabi can be used via a web-based environment in which tools are dragged and dropped to create sophisticated workflows. It can also be accessed through the Yabi command line, which is designed for users who are more comfortable writing scripts or for enabling external workflow environments to leverage Yabi's features. Configuring tools can be a significant overhead in workflow environments; Yabi greatly simplifies this task by enabling system administrators to configure and manage running tools via a web-based environment, without the need to write or edit software programs or scripts. In this paper, we highlight Yabi's capabilities through a range of bioinformatics use cases that arise from large-scale biomedical data analysis.
Conclusion: The Yabi system encapsulates a considered design of both execution and data models, while abstracting technical details away from users who are not skilled in HPC and providing an intuitive, scalable, drag-and-drop web-based workflow environment in which the same tools can also be accessed via a command line. Yabi is currently in use and deployed at multiple institutions and is available at http://ccg.murdoch.edu.au/yabi....
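To make the phrase "pluggable execution and data backends" more concrete, here is a purely hypothetical interface of the kind such an architecture implies: each backend (a local process runner, a PBS/Torque cluster, a cloud VM pool) implements the same submit/poll/stage-out contract, so a workflow definition can run unchanged across heterogeneous environments. The names and methods below are invented for this sketch and do not correspond to Yabi's actual codebase.

/**
 * Hypothetical "pluggable execution backend" abstraction. Concrete backends
 * (local, batch scheduler, cloud) would each implement this contract so that
 * workflow logic stays independent of where and how jobs actually run.
 */
public interface ExecutionBackend {

    /** Submit a tool invocation and return a backend-specific job identifier. */
    String submit(String toolName, java.util.Map<String, String> parameters) throws Exception;

    /** Poll the backend for the current state of a previously submitted job. */
    JobState poll(String jobId) throws Exception;

    /** Copy a result file from backend storage to a destination URI. */
    void stageOut(String jobId, java.net.URI destination) throws Exception;

    enum JobState { PENDING, RUNNING, COMPLETED, FAILED }
}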